Reversible Jump Metropolis Light Transport using Inverse Mappings
We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample
space and their interactions with multiple sampling techniques. We observe that
incorporating the sampling technique into the state of the Markov Chain, as
done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of
the chain to properly explore the path space, as transitions between sampling
techniques lead to disruptive alterations of path samples. To address this
issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC framework
(RJMCMC) and introduce inverse sampling techniques that turn light paths into
the random numbers that would produce them. This allows us to formulate a novel
perturbation that can locally transition between sampling techniques without
changing the geometry of the path, and we derive the correct acceptance
probability using RJMCMC. We investigate how to generalize this concept to
non-invertible sampling techniques commonly found in practice, and introduce
probabilistic inverses that extend our perturbation to cover most sampling
methods found in light transport simulations. Our theory reconciles the
inverses with RJMCMC yielding an unbiased algorithm, which we call Reversible
Jump MLT (RJMLT). We verify the correctness of our implementation in canonical
and practical scenarios and demonstrate improved temporal coherence, a decrease
in structured artifacts, and faster convergence on a wide variety of scenes.
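The primary-sample-space view used above can be illustrated with a toy Metropolis sampler: the chain lives on random numbers in [0,1)^d, which a renderer would map through a sampling technique to a light path. The `target` function below is a stand-in for a path contribution, and the small-step proposal is illustrative; the paper's reversible-jump acceptance between sampling techniques is not reproduced here.

```python
import random

def target(u):
    """Stand-in for a path contribution as a function of primary-sample-space
    coordinates u in [0,1)^2; a real renderer would map u through its
    sampling technique to a light path and evaluate its throughput."""
    return (u[0] * (1.0 - u[1])) ** 2 + 1e-6

def small_step(u, sigma=0.02):
    """Perturb each primary-space coordinate, wrapping around [0,1)."""
    return [(x + random.gauss(0.0, sigma)) % 1.0 for x in u]

def psmlt(n_steps, seed=0):
    """Minimal primary-sample-space Metropolis chain (symmetric proposal)."""
    random.seed(seed)
    u = [random.random(), random.random()]
    f = target(u)
    samples = []
    for _ in range(n_steps):
        v = small_step(u)
        g = target(v)
        # Symmetric proposal: standard Metropolis acceptance probability.
        if random.random() < min(1.0, g / f):
            u, f = v, g
        samples.append(f)
    return samples
```

Because the proposal is symmetric in primary sample space, the acceptance probability reduces to min(1, g/f); the reversible-jump formulation in the abstract generalizes this to asymmetric jumps between sampling techniques.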
A Generalized Ray Formulation For Wave-Optics Rendering
Under ray-optical light transport, the classical ray serves as a local and
linear "point query" of light's behaviour. Such point queries are useful, and
sophisticated path tracing and sampling techniques enable efficiently computing
solutions to light transport problems in complex, real-world settings and
environments. However, such formulations are firmly confined to the realm of
ray optics, while many applications of interest, in computer graphics and
computational optics, demand a more precise understanding of light. We
rigorously formulate the generalized ray, which enables local and linear point
queries of the wave-optical phase space. Furthermore, we present sample-solve:
a simple method that serves as a novel link between path tracing and
computational optics. We will show that this link enables the application of
modern path tracing techniques for wave-optical rendering, improving upon the
state-of-the-art in terms of the generality and accuracy of the formalism, ease
of application, as well as performance. Sampling using generalized rays enables
interactive rendering under rigorous wave optics, with orders-of-magnitude
faster performance compared to existing techniques.
For additional information, see https://ssteinberg.xyz/2023/03/27/rtplt
A radiative transfer framework for non-exponential media
We develop a new theory of volumetric light transport for media with non-exponential free-flight distributions. Recent insights from atmospheric sciences and neutron transport demonstrate that such distributions arise in the presence of correlated scatterers, which are naturally produced by processes such as cloud condensation and fractal-pattern formation. Our theory accommodates correlations by disentangling the concepts of the free-flight distribution and transmittance, which are equivalent when scatterers are statistically independent, but become distinct when correlations are present. Our theory results in a generalized path integral which allows us to handle non-exponential media using the full range of Monte Carlo rendering algorithms while enriching the range of achievable appearance. We propose parametric models for controlling the statistical correlations by leveraging work on stochastic processes, and we develop a method to combine such unresolved correlations (and the resulting non-exponential free-flight behavior) with explicitly modeled macroscopic heterogeneity. This provides a powerful authoring approach where artists can freely design the shape of the attenuation profile separately from the macroscopic heterogeneous density, while our theory provides a physically consistent interpretation in terms of a path space integral. We address important considerations for graphics including energy conservation, reciprocity, and bidirectional rendering algorithms, all in the presence of surfaces and correlated media.
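The distinction between exponential and non-exponential free flight can be sketched by inverse-CDF sampling of two attenuation profiles. The power-law transmittance below is one common heavy-tailed family used to model correlated scatterers; its exact parameterization here is illustrative and not taken from the paper.

```python
import math

def sample_exponential(sigma, xi):
    """Classical free flight: transmittance T(t) = exp(-sigma*t).
    Inverting CDF(t) = 1 - T(t) for a uniform random number xi in [0,1)."""
    return -math.log(1.0 - xi) / sigma

def transmittance_power_law(sigma, a, t):
    """Illustrative non-exponential transmittance T(t) = (1 + sigma*t/a)^(-a);
    heavier-tailed than exp(-sigma*t), approaching it as a -> infinity."""
    return (1.0 + sigma * t / a) ** (-a)

def sample_power_law(sigma, a, xi):
    """Free-flight distance for the power-law profile, again by inverting
    CDF(t) = 1 - T(t): t = (a/sigma) * ((1-xi)^(-1/a) - 1)."""
    return (a / sigma) * ((1.0 - xi) ** (-1.0 / a) - 1.0)
```

In uncorrelated media the free-flight density is simply the negative derivative of the transmittance, as in both examples here; the theory above is needed precisely because that identity breaks down once correlations make the two concepts distinct.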
Correlations and Reuse for Fast and Accurate Physically Based Light Transport
Light transport is the study of the transfer of light between emitters, surfaces, media and sensors. Fast simulations of light transport play a pivotal role in photo-realistic image synthesis, and find many applications today, including predictive manufacturing, machine learning, scientific visualization and the movie industry. In order to accurately reproduce the appearance of real scenes, light transport must closely approximate the physical laws governing the flow of light. Physically based rendering is a set of principles for codifying these laws into a mathematical model, and is the predominant rendering methodology today.
The representational power of this model is limited to the effects it chooses to capture. Simultaneously, simulating the model is an additional source of approximation error: The predominant solution framework in use today—Monte Carlo integration—produces the exact image predicted by the model typically only in the limit of infinite computation; at any finite time, an image contaminated with noise is obtained.
In this dissertation, we are concerned with improving the accuracy of physically based light transport. We achieve this both by improving the representational power of the model, and by making the rendering algorithms more efficient, leading to lower error at any given computational budget. In particular, we will investigate correlations and reuse: On the one hand, prevalent models in rendering assume natural processes to arise from random, independent events, and simulate them as such. We will show that for participating media—such as clouds, fog, or smoke—this assumption does not hold, and we introduce an augmented model that can faithfully represent such correlations. On the other hand, the types of solutions that satisfy the rendering problem show a great deal of correlation. Because all pixels in an image view the same scene, the mathematical problems to be solved are greatly interrelated. Where naive rendering algorithms treat each pixel in isolation, we will focus on reusing the same computation over many pixels, exploiting the natural correlations present and thus amortizing computational effort. We improve over prior work by leveraging additional insights about the structure of the rendering problem to allow a greater amount of reuse, and thus efficiency.
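The claim that Monte Carlo integration yields the exact answer only in the limit can be demonstrated numerically: for a simple 1D integrand, the root-mean-square error of the estimator shrinks roughly as 1/sqrt(N). All names and parameters below are illustrative.

```python
import math
import random

def mc_estimate(f, n, rng):
    """Monte Carlo estimate of the integral of f over [0,1] using n samples."""
    return sum(f(rng.random()) for _ in range(n)) / n

def rms_error(f, exact, n, trials=200, seed=0):
    """Root-mean-square error of the n-sample estimator over many trials."""
    rng = random.Random(seed)
    se = sum((mc_estimate(f, n, rng) - exact) ** 2 for _ in range(trials))
    return math.sqrt(se / trials)

# Integrand x^2 on [0,1] has exact integral 1/3; the RMS error should
# shrink roughly by 4x when the sample count grows by 16x.
f = lambda x: x * x
e1 = rms_error(f, 1.0 / 3.0, 16)
e2 = rms_error(f, 1.0 / 3.0, 256)
```

At any finite N the estimate remains noisy; an image rendered this way is "contaminated with noise" exactly as the abstract describes, which is what reuse across correlated pixels aims to mitigate.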
Bilateral Regularization in Reproducing Kernel Hilbert Spaces for Discontinuity Preserving Image Registration
The registration of abdominal images is a growing field of research and forms the basis for studying the dynamic motion of organs. Particularly challenging are organs that slide along each other: they require discontinuous transform mappings at the sliding boundaries to be accurately aligned. In this paper, we present a novel approach for discontinuity-preserving image registration. We base our method on a sparse kernel machine (SKM), where reproducing kernel Hilbert spaces serve as transformation models. We introduce a bilateral regularization term in which neighboring transform parameters are considered jointly. This regularizer enforces a bias toward homogeneous regions in the transform mapping while preserving discontinuous magnitude changes of parameter vectors pointing in equal directions. We prove a representer theorem for the overall cost function, including this bilateral regularizer, in order to guarantee a finite-dimensional solution. In addition, we build direction-dependent basis functions within the SKM framework in order to elongate the transformations along the potential sliding organ boundaries. In the experiments, we evaluate the registration results of our method on a 4DCT dataset and show that it outperforms the tested methods.
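The idea of a regularizer that tolerates magnitude jumps between same-direction vectors (sliding motion) while penalizing direction changes can be sketched as below. The functional form is a stand-in chosen for illustration, not the paper's actual bilateral term.

```python
import math

def bilateral_penalty(params, neighbors):
    """Illustrative discontinuity-preserving penalty over 2D transform
    parameter vectors. Neighboring vectors are compared by direction only:
    pairs pointing the same way incur no penalty even if their magnitudes
    differ (so a sliding boundary is not smoothed away), while direction
    changes are penalized. `params` maps node index -> (x, y) vector;
    `neighbors` is a list of index pairs."""
    total = 0.0
    for i, j in neighbors:
        vi, vj = params[i], params[j]
        ni = math.hypot(vi[0], vi[1]) + 1e-12
        nj = math.hypot(vj[0], vj[1]) + 1e-12
        cos = (vi[0] * vj[0] + vi[1] * vj[1]) / (ni * nj)
        # Direction mismatch in [0, 2]: 0 when aligned, 2 when opposed.
        total += 1.0 - cos
    return total
```

Two aligned vectors of very different length contribute nothing, whereas an orthogonal pair contributes a full unit of penalty; a cost function built from such terms biases the transform toward piecewise-homogeneous motion.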
Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings
We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empirical point of view, relating the strengths and limitations of their corresponding components with an emphasis on production requirements. The observations of our analysis inform the design of our new filter that offers high-quality results and stable performance. A key observation of our analysis is that using auxiliary buffers (normal, albedo, etc.) to compute the regression weights greatly improves the robustness of zero-order models, but can be detrimental to first-order models. Consequently, our filter performs a first-order regression leveraging a rich set of auxiliary buffers only when fitting the data, and, unlike recent works, considers the pixel color alone when computing the regression weights. We further improve the quality of our output by using a collaborative denoising scheme. Lastly, we introduce a general mean squared error estimator, which can handle the collaborative nature of our filter and its nonlinear weights, to automatically set the bandwidth of our regression kernel.
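The split described above — auxiliary buffers used only in the fit, pixel color alone in the weights — can be sketched with a 1D weighted first-order regression. The buffer choice, window size, and bandwidth below are illustrative, and the collaborative scheme and MSE-based bandwidth selection are omitted.

```python
import math

def denoise_first_order(colors, albedo, radius=3, sigma_c=0.5):
    """Sketch of a nonlinearly weighted first-order regression filter on a
    1D signal. Regression weights depend on the noisy pixel colors alone,
    while an auxiliary buffer (here, albedo) serves as the explanatory
    variable of the first-order (linear) model fitted around each pixel."""
    n = len(colors)
    out = [0.0] * n
    for p in range(n):
        lo, hi = max(0, p - radius), min(n, p + radius + 1)
        # Color-based weights: nonlinear in the data, no auxiliary buffers.
        w = [math.exp(-((colors[q] - colors[p]) ** 2) / (2 * sigma_c ** 2))
             for q in range(lo, hi)]
        # Centered auxiliary feature; the fitted intercept b0 is the
        # denoised value at pixel p.
        x = [albedo[q] - albedo[p] for q in range(lo, hi)]
        y = [colors[q] for q in range(lo, hi)]
        # Weighted least squares for y ~ b0 + b1 * x (normal equations).
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * sxx - sx * sx
        if abs(det) < 1e-12:
            out[p] = sy / sw  # degenerate fit: fall back to zero-order mean
        else:
            out[p] = (sxx * sy - sx * sxy) / det
    return out
```

When the auxiliary buffer carries no information (constant albedo), the fit degenerates gracefully to a color-weighted average, which mirrors why robust fallback behavior matters for first-order models in production.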